86 research outputs found

    Complexity reduction of the context-tree weighting algorithm : a study for KPN Research


    Universele bronkodering [Universal source coding]


    Low Complexity Sequential Probability Estimation and Universal Compression for Binary Sequences with Constrained Distributions

    Two low-complexity methods are proposed for sequential probability assignment for binary independent and identically distributed (i.i.d.) individual sequences whose empirical distributions have governing parameters known to be bounded within a limited interval. The methods can be applied to problems where fast, accurate estimation of the maximizing sequence probability is essential to minimizing some loss; such applications include finance, learning, channel estimation and decoding, prediction, and universal compression. The application of the new methods to universal compression is studied, and their universal coding redundancies are analyzed. One of the methods is shown to achieve the minimax redundancy within the inner region of the limited parameter interval; the other achieves better performance on the region boundaries and is numerically more robust to outliers. Simulation results support the analysis of both methods. While the non-asymptotic gains over standard methods that maximize the probability over the complete parameter simplex may be significant, the asymptotic gains are of second order. These gains nevertheless translate into meaningful factor gains in other applications, such as financial ones. Moreover, the proposed methods generate estimators that remain within the given interval throughout the estimation process, which is essential in applications such as sequential binary channel crossover estimation. The results for the binary case lay the foundation for studying larger alphabets.
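
    The paper's specific estimators are not reproduced here; as a hedged illustration of the general idea only, the sketch below uses a Krichevsky-Trofimov (add-1/2) estimator whose sequential prediction is clipped to a known parameter interval [lo, hi]. The function name, interval bounds, and clipping rule are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's methods): an add-1/2 (Krichevsky-Trofimov)
# estimator whose sequential prediction is clipped to a known interval
# [lo, hi]; the interval bounds here are illustrative assumptions.

def constrained_kt_predictions(bits, lo=0.1, hi=0.4):
    """Return the predicted P(next bit = 1) before each bit of `bits` is observed."""
    ones = zeros = 0
    preds = []
    for b in bits:
        p1 = (ones + 0.5) / (ones + zeros + 1.0)  # standard KT estimate
        p1 = min(max(p1, lo), hi)                 # enforce the known interval
        preds.append(p1)
        ones += b
        zeros += 1 - b
    return preds

if __name__ == "__main__":
    print(constrained_kt_predictions([0, 0, 1, 0, 1, 0, 0, 0]))
```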

    On the principal state method for run-length limited sequences


    Universal Noiseless Compression for Noisy Data

    We study universal compression for discrete data sequences that have been corrupted by noise. We show that while, as expected, there are many cases in which the entropy of these sequences increases relative to that of the original data, somewhat surprisingly and counter-intuitively, the universal coding redundancy of such sequences cannot increase compared to that of the original data. We derive conditions that guarantee that this redundancy does not decrease asymptotically (in first order) from the original sequence redundancy in the stationary memoryless case. We then provide bounds on the redundancy for coding finite-length (large) noisy blocks generated by stationary memoryless sources and corrupted by specific memoryless channels. Finally, we propose a sequential probability estimation method that can be used to compress binary data corrupted by a noisy channel. While this method is particularly beneficial for compressing short blocks of noise-corrupted data, it is more general and allows sequential compression of binary sequences for which the probability of a bit is known to be confined to any given interval (not necessarily the full interval between 0 and 1). This method also has many other applications, including prediction and sequential channel estimation.
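
    As a hedged illustration only, not the paper's construction: a binary source with P(1) = theta observed through a binary symmetric channel with crossover probability q < 1/2 yields bits whose marginal probability theta*(1-q) + (1-theta)*q always lies in [q, 1-q], so an estimator constrained to that interval is a natural fit. The sketch below compares the ideal sequential code length of an add-1/2 estimator with and without that interval constraint; all parameter values are assumptions chosen for the example.

```python
# Illustrative sketch: ideal sequential code length of an add-1/2 estimator,
# optionally clipped to a known probability interval [lo, hi]; the source,
# channel, and interval parameters below are assumptions for the example.
import math
import random

def ideal_code_length(bits, lo=0.0, hi=1.0):
    ones = zeros = 0
    total = 0.0
    for b in bits:
        p1 = (ones + 0.5) / (ones + zeros + 1.0)  # add-1/2 estimate
        p1 = min(max(p1, lo), hi)                 # optional interval constraint
        total += -math.log2(p1 if b else 1.0 - p1)
        ones += b
        zeros += 1 - b
    return total

if __name__ == "__main__":
    random.seed(0)
    theta, q, n = 0.05, 0.2, 200                  # source bias, crossover, length
    clean = [1 if random.random() < theta else 0 for _ in range(n)]
    noisy = [b ^ (1 if random.random() < q else 0) for b in clean]
    print("unconstrained:", round(ideal_code_length(noisy), 2), "bits")
    print("constrained:  ", round(ideal_code_length(noisy, q, 1 - q), 2), "bits")
```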

    A fast multiplier over GF(2^n)

    In this paper we present a hardware implementation of a GF(2^n) polynomial-basis multiplier that is twice as fast as the classical multiplier while requiring about 50% more chip area. Using this multiplier we implement a flexible scalar (or point) multiplier for elliptic curve cryptosystems and find that the flexible system performs almost twice as fast as the one based on the classical multiplier.
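
    The hardware design itself is not described beyond the abstract; as a point of reference, a plain software version of the classical shift-and-add polynomial-basis multiplication over GF(2^n) might look like the sketch below. The field size and reduction polynomial (GF(2^8) with x^8 + x^4 + x^3 + x + 1, as used in AES) are illustrative choices, not taken from the paper.

```python
# Software reference for classical polynomial-basis multiplication in GF(2^n)
# (shift-and-add with modular reduction); field parameters are illustrative.

def gf2n_mul(a, b, n=8, red_poly=0x11B):
    """Multiply field elements a, b in GF(2^n) defined by red_poly."""
    result = 0
    while b:
        if b & 1:            # current bit of b set: add (XOR) a shifted copy of a
            result ^= a
        b >>= 1
        a <<= 1
        if a >> n:           # degree reached n: reduce modulo the field polynomial
            a ^= red_poly
    return result

if __name__ == "__main__":
    print(hex(gf2n_mul(0x57, 0x83)))  # 0xc1 in the AES field (known test vector)
```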

    A comparative complexity study of fixed-to-variable length and variable-to-fixed length source codes


    Efficient and fast data compression codes for discrete sources with memory


    Storage Complexity of Source Codes


    State dependent coding: how to find the state?
